47 research outputs found

    Development, Problem Behavior, and Quality of Life in a Population Based Sample of Eight-Year-Old Children with Down Syndrome

    OBJECTIVE: Children with Down syndrome (DS) have delayed psychomotor development. We investigated levels of development, problem behavior, and health-related quality of life (HRQoL) in a population sample of Dutch eight-year-old children with DS. Developmental outcomes were compared with normative data for eight-year-old children from the general population. METHOD: Over a three-year period, all parents of an eight-year-old child with DS were approached by the national parent organization. Developmental skills were assessed by means of the McCarthy Scales of Children's Abilities. Emotional and behavioral problems were measured with the Child Behavior Checklist, and HRQoL was assessed with the TNO-AZL Children's Quality of Life questionnaire. Analyses of variance were applied to compare groups. RESULTS: A total of 337 children participated. Mean developmental age was substantially lower than mean calendar age (3.9 years, SD 0.87, versus 8.1 years, SD 0.15). Mean developmental age was significantly lower among boys than girls (3.6 years, SD 0.85, versus 4.2 years, SD 0.82; p < 0.001). Compared with the general population, children with DS had more emotional and behavioral problems (p < 0.001); however, on the anxious/depressed scale they scored significantly more favorably (p < 0.001). Significantly lower HRQoL scores were found for the scales gross motor skills, autonomy, social functioning and cognitive functioning (p-values < 0.001). Hardly any differences were observed for the scales physical complaints and positive and negative emotions. CONCLUSION: Eight-year-old children with DS have an average developmental delay of four years, more often have emotional and behavioral problems, and have a less favorable HRQoL compared with children from the general population.
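    The boys-versus-girls comparison reported above can be sketched with a one-way analysis of variance. This is a minimal illustration on synthetic scores whose means and SDs mirror the reported figures; the scores themselves and the equal group sizes are assumptions, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic developmental-age scores (years). Group means and SDs mirror the
# reported figures (boys ~3.6, SD 0.85; girls ~4.2, SD 0.82); the equal group
# sizes are an assumption.
boys = rng.normal(loc=3.6, scale=0.85, size=160)
girls = rng.normal(loc=4.2, scale=0.82, size=160)

# One-way analysis of variance, the test family the study used to compare groups.
f_stat, p_value = stats.f_oneway(boys, girls)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```

With group differences of this size and samples this large, the p-value falls well below the 0.001 threshold quoted in the abstract.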

    Cross-Species Comparison of Genes Related to Nutrient Sensing Mechanisms Expressed along the Intestine

    Introduction Intestinal chemosensory receptors and transporters are able to detect food-derived molecules and are involved in the modulation of gut hormone release. Gut hormones play an important role in the regulation of food intake and the control of gastrointestinal functioning. This mechanism is often referred to as “nutrient sensing”. Knowledge of the distribution of chemosensors along the intestinal tract is important to gain insight into nutrient detection and sensing, both pivotal processes for the regulation of food intake. However, most knowledge is derived from rodents, whereas studies in man and pig are limited, and cross-species comparisons are lacking. Aim To characterize and compare intestinal expression patterns of genes related to nutrient sensing in mice, pigs and humans. Methods Mucosal biopsy samples taken at six locations in human intestine (n = 40) were analyzed by qPCR. Intestinal scrapings from 14 locations in pigs (n = 6) and from 10 locations in mice (n = 4) were analyzed by qPCR and microarray, respectively. The gene expression of glucagon, cholecystokinin, peptide YY, glucagon-like peptide-1 receptor, taste receptor T1R3, sodium/glucose cotransporter, peptide transporter-1, GPR120, taste receptor T1R1, GPR119 and GPR93 was investigated. Partial least squares (PLS) modeling was used to compare the intestinal expression pattern between the three species. Results and conclusion The studied genes were found to display specific expression patterns along the intestinal tract. PLS analysis showed a high similarity between human, pig and mouse in the expression of genes related to nutrient sensing in the distal ileum, and between human and pig in the colon. The gene expression pattern deviated most between the species in the proximal intestine. Our results give new insights into interspecies similarities and provide new leads for translational research and models aiming to modulate food intake processes in man.

    Complex speech-language therapy interventions for stroke-related aphasia: The RELEASE study incorporating a systematic review and individual participant data network meta-analysis

    Background: People with language problems following stroke (aphasia) benefit from speech and language therapy. Optimising speech and language therapy for aphasia recovery is a research priority. Objectives: The objectives were to explore patterns and predictors of language and communication recovery, optimum speech and language therapy intervention provision, and whether or not effectiveness varies by participant subgroup or language domain. Design: This research comprised a systematic review, a meta-analysis and a network meta-analysis of individual participant data. Setting: Participant data were collected in research and clinical settings. Interventions: The intervention under investigation was speech and language therapy for aphasia after stroke. Main outcome measures: The main outcome measures were absolute changes in language scores from baseline on overall language ability, auditory comprehension, spoken language, reading comprehension, writing and functional communication. Data sources and participants: Electronic databases were systematically searched, including MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature, Linguistics and Language Behavior Abstracts and SpeechBITE (searched from inception to 2015). The results were screened for eligibility, and published and unpublished data sets (randomised controlled trials, non-randomised controlled trials, cohort studies, case series, registries) with individual participant data for at least 10 participants reporting aphasia duration and severity were identified. Existing collaborators and primary researchers named in identified records were invited to contribute electronic data sets. Individual participant data in the public domain were extracted. Review methods: Data on demographics, speech and language therapy interventions, outcomes and quality criteria were independently extracted by two reviewers, or were available as individual participant data sets.
Meta-analysis and network meta-analysis were used to generate hypotheses. Results: We retrieved 5928 individual participant data from 174 data sets across 28 countries, comprising 75 electronic (3940 individual participant data), 47 randomised controlled trial (1778 individual participant data) and 91 speech and language therapy intervention (2746 individual participant data) data sets. The median participant age was 63 years (interquartile range 53-72 years). We identified 53 unavailable, but potentially eligible, randomised controlled trials (46 of these appeared to include speech and language therapy). Relevant individual participant data were filtered into each analysis. Statistically significant predictors of recovery included age (functional communication, individual participant data: 532, n = 14 randomised controlled trials) and sex (overall language ability, individual participant data: 482, n = 11 randomised controlled trials; functional communication, individual participant data: 532, n = 14 randomised controlled trials). Older age and a longer time since aphasia onset predicted poorer recovery. A negative relationship between baseline severity score and change from baseline (p < 0.0001) may reflect the reduced improvement possible from high baseline scores. The frequency, duration, intensity and dosage of speech and language therapy were variously associated with auditory comprehension, naming and functional communication recovery. There were insufficient data to examine spontaneous recovery. 
The greatest overall gains in language ability [14.95 points (95% confidence interval 8.7 to 21.2 points) on the Western Aphasia Battery-Aphasia Quotient] and functional communication [0.78 points (95% confidence interval 0.48 to 1.1 points) on the Aachen Aphasia Test-Spontaneous Communication] were associated with receiving speech and language therapy 4 to 5 days weekly; for auditory comprehension [5.86 points (95% confidence interval 1.6 to 10.0 points) on the Aachen Aphasia Test-Token Test], the greatest gains were associated with receiving speech and language therapy 3 to 4 days weekly. The greatest overall gains in language ability [15.9 points (95% confidence interval 8.0 to 23.6 points) on the Western Aphasia Battery-Aphasia Quotient] and functional communication [0.77 points (95% confidence interval 0.36 to 1.2 points) on the Aachen Aphasia Test-Spontaneous Communication] were associated with speech and language therapy participation from 2 to 4 (and more than 9) hours weekly, whereas the highest auditory comprehension gains [7.3 points (95% confidence interval 4.1 to 10.5 points) on the Aachen Aphasia Test-Token Test] were associated with speech and language therapy participation in excess of 9 hours weekly (with similar gains noted for 4 hours weekly). While clinically similar gains were made alongside different speech and language therapy intensities, the greatest overall gains in language ability [18.37 points (95% confidence interval 10.58 to 26.16 points) on the Western Aphasia Battery-Aphasia Quotient] and auditory comprehension [5.23 points (95% confidence interval 1.51 to 8.95 points) on the Aachen Aphasia Test-Token Test] were associated with 20-50 hours of speech and language therapy. Network meta-analyses on naming and the duration of speech and language therapy interventions across language outcomes were unstable. Relative variance was acceptable (< 30%). Subgroups may benefit from specific interventions. 
Limitations: Data sets were graded as being at a low risk of bias but were predominantly based on highly selected research participants, assessments and interventions, thereby limiting generalisability. Conclusions: Frequency, intensity and dosage were associated with language gains from baseline, but varied by domain and subgroup.
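    The pooled gains and confidence intervals quoted above come from (network) meta-analytic models fitted to individual participant data. The generic inverse-variance pooling step behind such summaries can be sketched as follows; the per-study gains and standard errors are invented numbers for illustration, not RELEASE estimates.

```python
import numpy as np

# Hypothetical per-study mean gains (points) and standard errors; invented
# numbers for illustration, not RELEASE estimates.
gains = np.array([14.0, 16.5, 12.8])
se = np.array([3.0, 4.0, 2.5])

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
w = 1.0 / se**2
pooled = (w * gains).sum() / w.sum()
pooled_se = np.sqrt(1.0 / w.sum())
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled gain: {pooled:.1f} points (95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```

Network meta-analysis extends this idea by pooling direct and indirect comparisons across a network of interventions, which is where the instability noted above can arise when evidence is sparse.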

    Do physician outcome judgments and judgment biases contribute to inappropriate use of treatments? Study protocol

    Background: There are many examples of physicians using treatments inappropriately, despite clear evidence about the circumstances under which the benefits of such treatments outweigh their harms. When such over- or under-use of treatments occurs for common diseases, the burden to the healthcare system and the risks to patients can be substantial. We propose that a major contributor to inappropriate treatment may be how clinicians judge the likelihood of important treatment outcomes, and how these judgments influence their treatment decisions. The current study will examine the role of judged outcome probabilities and other cognitive factors in the context of two clinical treatment decisions: 1) prescription of antibiotics for sore throat, where we hypothesize that overestimation of benefit and underestimation of harm leads to over-prescription of antibiotics; and 2) initiation of anticoagulation for patients with atrial fibrillation (AF), where we hypothesize that underestimation of benefit and overestimation of harm leads to under-prescription of warfarin. Methods: For each of the two conditions, we will administer surveys of two types (Type 1 and Type 2) to different samples of Canadian physicians. The primary goal of the Type 1 survey is to assess physicians' perceived outcome probabilities (both good and bad outcomes) for the target treatment. Type 1 surveys will assess judged outcome probabilities in the context of a representative patient, and include questions about how physicians currently treat such cases, the recollection of rare or vivid outcomes, as well as practice and demographic details. The primary goal of the Type 2 surveys is to measure the specific factors that drive individual clinical judgments and treatment decisions, using a 'clinical judgment analysis' or 'lens modeling' approach. This survey will manipulate eight clinical variables across a series of sixteen realistic case vignettes. Based on the survey responses, we will be able to identify which variables have the greatest effect on physician judgments, and whether judgments are affected by inappropriate cues or incorrect weighting of appropriate cues. We will send antibiotics surveys to family physicians (300 per survey), and warfarin surveys to both family physicians and internal medicine specialists (300 per group per survey), for a total of 1,800 physicians. Each Type 1 survey will be two to four pages in length and take about fifteen minutes to complete, while each Type 2 survey will be eight to ten pages in length and take about thirty minutes to complete. Discussion: This work will provide insight into the extent to which clinicians' judgments about the likelihood of important treatment outcomes explain inappropriate treatment decisions. This work will also provide information necessary for the development of an individualized feedback tool designed to improve treatment decisions. The techniques developed here have the potential to be applicable to a wide range of clinical areas where inappropriate utilization stems from biased judgments.
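    The 'lens modeling' analysis described above amounts to regressing each physician's vignette judgments on the manipulated cue values to estimate the weight each cue carries in their decisions. The sketch below fabricates a 16-vignette, 8-cue design with known weights and recovers them by least squares; the cue values, weights and noise level are all illustrative assumptions, not the study's materials.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vignettes, n_cues = 16, 8  # mirrors the 16 vignettes x 8 clinical variables

# Hypothetical standardized cue levels per vignette and 'true' cue weights;
# neither corresponds to the study's actual design.
cues = rng.normal(size=(n_vignettes, n_cues))
true_weights = np.array([2.0, 1.5, 0.0, 0.0, -1.0, 0.5, 0.0, 0.0])

# Simulated judgments: a weighted sum of the cues plus response noise.
judgments = cues @ true_weights + rng.normal(scale=0.1, size=n_vignettes)

# Lens-model step: ordinary least squares recovers the weight each cue
# carries (the leading column of ones absorbs the baseline tendency).
X = np.column_stack([np.ones(n_vignettes), cues])
coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)
cue_weights = coef[1:]
print("estimated cue weights:", np.round(cue_weights, 2))
```

Comparing the recovered weights against guideline-appropriate weights is how the analysis would flag reliance on inappropriate cues or mis-weighting of appropriate ones.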

    Inherited determinants of Crohn's disease and ulcerative colitis phenotypes: a genetic association study

    Crohn's disease and ulcerative colitis are the two major forms of inflammatory bowel disease; treatment strategies have historically been determined by this binary categorisation. Genetic studies have identified 163 susceptibility loci for inflammatory bowel disease, mostly shared between Crohn's disease and ulcerative colitis. We undertook the largest genotype association study to date in widely used clinical subphenotypes of inflammatory bowel disease, with the goal of further understanding the biological relations between the diseases.

    The new EuroSCORE II does not improve prediction of mortality in high-risk patients undergoing cardiac surgery: a collaborative analysis of two European centres

    Prediction of operative risk in adult patients undergoing cardiac surgery remains a challenge, particularly in high-risk patients. In Europe, the EuroSCORE is the most commonly used risk-prediction model, but it is no longer accurately calibrated for contemporary practice. The new EuroSCORE II was recently published in an attempt to improve risk prediction. We sought to assess the predictive value of EuroSCORE II compared with the original EuroSCOREs in high-risk patients. Patients who underwent surgery between 1 April 2006 and 31 March 2011 with a preoperative logistic EuroSCORE >= 10 were identified from prospective cardiac surgical databases at two European institutions. Additional variables included in EuroSCORE II, but not in the original EuroSCORE, were collected retrospectively through patient chart review. The C-statistic to predict in-hospital mortality was calculated for the additive EuroSCORE, logistic EuroSCORE and EuroSCORE II models, and the Hosmer-Lemeshow test was used to assess calibration. A total of 933 patients were identified; the median additive EuroSCORE was 10 (interquartile range [IQR] 9-11), the median logistic EuroSCORE 15.3 (IQR 12.0-24.1) and the median EuroSCORE II 9.3 (IQR 5.8-15.6). There were 90 (9.7%) in-hospital deaths. None of the EuroSCORE models performed well, with a C-statistic of 0.67 for the additive EuroSCORE and EuroSCORE II, and 0.66 for the logistic EuroSCORE. Model calibration was poor for the EuroSCORE II (chi-square 16.5; P = 0.035). Both the additive and logistic EuroSCOREs […]. The new EuroSCORE II does not improve risk prediction in high-risk patients undergoing adult cardiac surgery when compared with the original additive and logistic EuroSCOREs. The key problem of risk stratification in high-risk patients has not been addressed by this new model. Future iterations of the score should explore more advanced statistical methods and focus on developing procedure-specific algorithms. Moreover, models that predict complications in addition to mortality may prove to be of […]
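    The discrimination measure reported above, the C-statistic, is the probability that a randomly chosen patient who died had a higher predicted risk than a randomly chosen survivor. A minimal sketch, assuming a synthetic cohort loosely shaped like the study's (n = 933, roughly 10% mortality) rather than real EuroSCORE output:

```python
import numpy as np

def c_statistic(risk, died):
    """Probability that a randomly chosen patient who died had a higher
    predicted risk than a randomly chosen survivor (ties count half)."""
    risk = np.asarray(risk, dtype=float)
    died = np.asarray(died, dtype=bool)
    pos, neg = risk[died], risk[~died]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(3)

# Synthetic cohort loosely shaped like the one above (n = 933, ~10% mortality);
# the predicted risks are invented, not EuroSCORE output.
n = 933
risk = np.clip(rng.normal(loc=0.10, scale=0.05, size=n), 0.01, 0.6)
died = rng.random(n) < risk  # outcome stochastically linked to predicted risk

c = c_statistic(risk, died)
print(f"C-statistic: {c:.2f}")
```

A value near 0.5 means no better than chance; the 0.66-0.67 reported above is generally regarded as modest discrimination for a clinical risk model.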

    Discussion: Theory of Mind


    How orthodox protestant parents decide on the vaccination of their children: a qualitative study.

    BACKGROUND: Despite high vaccination coverage, there have recently been epidemics of vaccine preventable diseases in the Netherlands, largely confined to an orthodox protestant minority with religious objections to vaccination. The orthodox protestant minority consists of various denominations with either low, intermediate or high vaccination coverage. All orthodox protestant denominations leave the final decision to vaccinate or not up to their individual members. METHODS: To gain insight into how orthodox protestant parents decide on vaccination, what arguments they use, and the consequences of their decisions, we conducted an in-depth interview study of both vaccinating and non-vaccinating orthodox protestant parents selected via purposeful sampling. The interviews were thematically coded by two analysts using the software program Atlas.ti. The initial coding results were reviewed, discussed, and refined by the analysts until consensus was reached. Emerging concepts were assessed for consistency using the constant comparative method from grounded theory. RESULTS: After 27 interviews, data saturation was reached. Based on characteristics of the decision-making process (tradition vs. deliberation) and outcome (vaccinate or not), 4 subgroups of parents could be distinguished: traditionally non-vaccinating parents, deliberately non-vaccinating parents, deliberately vaccinating parents, and traditionally vaccinating parents. Except for the traditionally vaccinating parents, all used predominantly religious arguments to justify their vaccination decisions. Also with the exception of the traditionally vaccinating parents, all reported facing fears that they had made the wrong decision. 
This fear was most tangible among the deliberately vaccinating parents, who thought they might be punished immediately by God for vaccinating their children and interpreted any side effects as a sign to stop vaccinating. CONCLUSIONS: Policy makers and health care professionals should encourage orthodox protestant parents to make a deliberate vaccination choice, but should also realize that a deliberate choice does not necessarily mean a choice to vaccinate.

    Including Health Economic Analysis in Pilot Studies: Lessons learned from a cost-utility analysis within the PROSPECTIV pilot study

    Purpose To assess feasibility and health economic benefits and costs as part of a pilot study of a nurse-led psychoeducational intervention (NLPI) for prostate cancer, in order to understand the potential for cost-effectiveness and to contribute to the design of a larger-scale trial. Methods Men with stable prostate cancer post-treatment were recruited from two cancer centres in the UK. Eighty-three men were randomised to the NLPI plus usual care or to usual care alone (UCA) (42 NLPI and 41 UCA); the NLPI plus usual care (the intervention) was delivered in the primary-care setting and included an initial face-to-face consultation with a trained nurse, with follow-up tailored to individual needs. The study afforded the opportunity to undertake a short-term within-pilot analysis. The primary outcome measure for the economic evaluation was quality of life, as measured by the five-level EuroQol five-dimension questionnaire (EQ-5D-5L). Costs (£, 2014 prices) assessed included health-service resource use, out-of-pocket expenses and losses from inability to undertake usual activities. Results Total and incremental costs varied across the different scenarios assessed, with mean cost differences ranging from £173 to £346; the incremental effect, as measured by the change in utility scores over the duration of follow-up, exhibited wide confidence intervals, highlighting inconclusive effectiveness (95% CI −0.0226 to 0.0438). The cost per patient of delivering the intervention would be reduced if it were rolled out to a larger patient cohort. Conclusions The NLPI is potentially cost saving depending on the scale of delivery; however, the results presented are not considered generalisable.
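    The incremental cost and incremental effect figures above are differences in arm means; dividing them gives the incremental cost-effectiveness ratio (ICER). The bookkeeping can be sketched with placeholder numbers (three patients per arm, invented costs and EQ-5D utility changes; none of these are PROSPECTIV data):

```python
import numpy as np

# Placeholder within-trial bookkeeping: three patients per arm, invented
# costs (£) and EQ-5D utility changes; none of these are PROSPECTIV data.
cost_nlpi = np.array([1200.0, 980.0, 1100.0])   # intervention arm
cost_uca = np.array([900.0, 1010.0, 850.0])     # usual-care arm
utility_change_nlpi = np.array([0.02, -0.01, 0.03])
utility_change_uca = np.array([0.00, 0.01, -0.02])

# Incremental cost and effect are differences in arm means; their ratio
# is the incremental cost-effectiveness ratio (ICER).
incremental_cost = cost_nlpi.mean() - cost_uca.mean()
incremental_effect = utility_change_nlpi.mean() - utility_change_uca.mean()
icer = incremental_cost / incremental_effect
print(f"incremental cost: £{incremental_cost:.0f}, "
      f"incremental effect: {incremental_effect:.4f}, ICER: £{icer:.0f}/utility unit")
```

When, as in the pilot, the confidence interval for the incremental effect spans zero, the ICER is unstable and is usually reported alongside cost-effectiveness acceptability curves rather than as a point estimate.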